Anticipating future actions from video observations is an important task in video understanding, useful for precautionary systems that need time to react before an event occurs. Since the input to action anticipation consists only of pre-action frames, models lack sufficient information about the target action; moreover, similar pre-action frames may lead to different futures. Consequently, any solution that reuses existing action recognition models can only be suboptimal. Recently, researchers have proposed using a longer video context to remedy the insufficient information in pre-action intervals, together with self-attention to query relevant past moments. However, the indirect use of video input features as the query may be inefficient, as it serves only as a proxy for the anticipation goal. To this end, we propose an inductive attention model that transparently uses the prior prediction as the query to derive the anticipation result by induction from past experience. Our method naturally accounts for the uncertainty of multiple futures via a many-to-many association. On large-scale egocentric video datasets, our model consistently outperforms the state of the art using the same backbone and is competitive with methods that employ a stronger backbone, while being more efficient with fewer model parameters.
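To make the idea concrete, here is a minimal sketch of attention in which the query is derived from a prior action prediction (class probabilities) rather than from video features; the module names, dimensions, and single-query design are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class InductiveAttention(nn.Module):
    """Hypothetical sketch: attend over past context using a prior
    prediction (class probabilities) as the query, instead of a
    feature-derived proxy query."""

    def __init__(self, num_classes: int, feat_dim: int, dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(num_classes, dim)  # query from prior prediction
        self.k_proj = nn.Linear(feat_dim, dim)     # keys from past features
        self.v_proj = nn.Linear(feat_dim, dim)     # values from past features
        self.cls = nn.Linear(dim, num_classes)     # refined anticipation

    def forward(self, prior_probs, past_feats):
        # prior_probs: (B, num_classes), past_feats: (B, T, feat_dim)
        q = self.q_proj(prior_probs).unsqueeze(1)            # (B, 1, dim)
        k, v = self.k_proj(past_feats), self.v_proj(past_feats)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return self.cls((attn @ v).squeeze(1))               # (B, num_classes)
```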
Every hour, a vast amount of visual content is posted on social media and user-generated content platforms. To find relevant videos via natural language queries, text-video retrieval methods have received increasing attention over the past few years. Data augmentation techniques were introduced to improve performance on unseen test examples by creating new training samples through semantics-preserving transformations, such as color-space or geometric transformations on images. However, these techniques are usually applied to the raw data, leading to more resource-demanding solutions, and also require that the raw data be shareable, which may not always be the case, e.g., due to copyright issues with clips from movies or TV series. To address this shortcoming, we propose a multimodal data augmentation technique that works in the feature space and creates new videos and captions by mixing semantically similar samples. We experiment with our solution on a large-scale public dataset, EPIC-Kitchens-100, achieve considerable improvements over a baseline method and improved state-of-the-art performance, and conduct multiple ablation studies. We release the code and pretrained models on GitHub at https://github.com/aranciokov/fsmmda_videoretrieval.
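A minimal sketch of what feature-space mixing could look like, assuming a mixup-style convex combination and nearest-neighbour pairing by cosine similarity; both choices are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def feature_space_mixup(video_feats, text_feats, alpha=0.2, rng=None):
    """Hedged sketch: create new (video, caption) training pairs by
    convexly mixing each sample with its most similar neighbour in
    feature space. The pairing rule and the Beta(alpha, alpha) mixing
    weight are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    # cosine similarity between video features to find semantically close pairs
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    sim = v @ v.T
    np.fill_diagonal(sim, -np.inf)               # never pair a sample with itself
    partner = sim.argmax(axis=1)
    lam = rng.beta(alpha, alpha, size=(len(video_feats), 1))
    new_video = lam * video_feats + (1 - lam) * video_feats[partner]
    new_text = lam * text_feats + (1 - lam) * text_feats[partner]
    return new_video, new_text
```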
This report presents the technical details of our submission to the EPIC-Kitchens-100 Multi-Instance Retrieval Challenge 2022. To participate in the challenge, we designed an ensemble consisting of different models trained with two recently developed relevance-augmented versions of the widely used triplet loss. Our submission, visible on the public leaderboard, achieves an average score of 61.02% nDCG and 49.77% mAP.
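For reference, a minimal sketch of a triplet loss for cross-modal retrieval of the kind the report builds on; the `relevance` weight here is a hypothetical placeholder for the relevance-augmented variants, whose exact form is not specified in this summary.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2, relevance=1.0):
    """Standard triplet loss on cosine distances for cross-modal retrieval.
    `relevance` is a hypothetical stand-in for a relevance-based weighting."""
    d_pos = 1 - F.cosine_similarity(anchor, positive)   # anchor-positive distance
    d_neg = 1 - F.cosine_similarity(anchor, negative)   # anchor-negative distance
    return (relevance * F.relu(d_pos - d_neg + margin)).mean()
```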
In this report, we describe the technical details of our submission to the EPIC-Kitchens-100 Action Anticipation Challenge. Our models, the higher-order recurrent space-time transformer and the message-passing neural network with edge learning, are both recurrent-based architectures that observe only 2.5 seconds of inference context to form an action anticipation prediction. By averaging the prediction scores of a set of models compiled from our proposed training pipeline, we achieved strong performance on the test set: 19.61% overall mean top-5 recall, recorded as second place on the public leaderboard.
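The ensemble step itself is straightforward score averaging; a minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def ensemble_top5_predictions(score_list):
    """Average per-model class scores (each of shape (num_clips, num_classes))
    and return the top-5 anticipated action indices per clip."""
    mean_scores = np.mean(np.stack(score_list), axis=0)
    return np.argsort(-mean_scores, axis=1)[:, :5]
```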
The goal of this paper is to detect objects by exploiting their interrelationships. Rather than relying on predefined and labeled graph structures, we infer a graph prior from object co-occurrence statistics. The key idea of our paper is to model object relations as a function of initial class predictions and co-occurrence priors to generate a graph representation of an image for improved classification and bounding box regression. We additionally learn the object-relation joint distribution via energy-based modeling. Sampling from this distribution generates a refined graph representation of the image, which in turn yields improved detection performance. Experiments on the Visual Genome and MS-COCO datasets demonstrate that our method is detector-agnostic, end-to-end trainable, and especially beneficial for rare object classes. Moreover, we establish a consistent improvement over object detectors such as DETR and Faster R-CNN, as well as over state-of-the-art methods that model object interrelationships.
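A minimal sketch of how such a co-occurrence prior could be estimated from training annotations; the row normalisation is an assumed design choice, not the paper's stated formulation.

```python
import numpy as np

def cooccurrence_prior(annotations, num_classes, eps=1e-8):
    """Hedged sketch: estimate a class co-occurrence prior from a list of
    per-image class-label lists. Row-normalising the counts is an
    illustrative choice."""
    counts = np.zeros((num_classes, num_classes))
    for labels in annotations:
        present = np.unique(labels)
        for i in present:                 # count classes appearing together
            for j in present:
                if i != j:
                    counts[i, j] += 1
    return counts / (counts.sum(axis=1, keepdims=True) + eps)
```

Such a prior can then weight the edges of the graph built from the detector's initial class predictions.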
Line segments are ubiquitous in our human-made world and are increasingly used in vision tasks. They are complementary to feature points thanks to their spatial extent and the structural information they provide. Traditional line detectors based on the image gradient are extremely fast and accurate, but lack robustness in noisy images and challenging conditions. Their learned counterparts are more repeatable and can handle challenging images, but at the cost of a lower accuracy and a bias towards wireframe lines. We propose to combine traditional and learned approaches to get the best of both worlds: an accurate and robust line detector that can be trained in the wild without ground truth lines. Our new line segment detector, DeepLSD, processes images with a deep network to generate a line attraction field, before converting it to a surrogate image gradient magnitude and angle, which is then fed to any existing handcrafted line detector. Additionally, we propose a new optimization tool to refine line segments based on the attraction field and vanishing points. This refinement improves the accuracy of current deep detectors by a large margin. We demonstrate the performance of our method on low-level line detection metrics, as well as on several downstream tasks using multiple challenging datasets. The source code and models are available at https://github.com/cvg/DeepLSD.
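A plausible sketch of the conversion step, assuming the attraction field provides a per-pixel distance to the nearest line together with that line's angle; the exponential fall-off is an assumed choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def surrogate_gradient(dist_field, line_angle, scale=2.0):
    """Hedged sketch: turn a line attraction field (per-pixel distance to
    the nearest line plus the line's angle) into a surrogate image
    gradient that a handcrafted detector can consume. The gradient
    direction is taken perpendicular to the line direction."""
    magnitude = np.exp(-dist_field / scale)          # strong response near lines
    grad_angle = np.mod(line_angle + np.pi / 2, np.pi)
    return magnitude, grad_angle
```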
Transformers have become the state-of-the-art neural network architecture across numerous domains of machine learning. This is partly due to their celebrated ability to transfer and to learn in-context based on few examples. Nevertheless, the mechanisms by which Transformers become in-context learners are not well understood and remain mostly an intuition. Here, we argue that training Transformers on auto-regressive tasks can be closely related to well-known gradient-based meta-learning formulations. We start by providing a simple weight construction that shows the equivalence of data transformations induced by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a regression loss. Motivated by that construction, we show empirically that when training self-attention-only Transformers on simple regression tasks either the models learned by GD and Transformers show great similarity or, remarkably, the weights found by optimization match the construction. Thus we show how trained Transformers implement gradient descent in their forward pass. This allows us, at least in the domain of regression problems, to mechanistically understand the inner workings of optimized Transformers that learn in-context. Furthermore, we identify how Transformers surpass plain gradient descent by an iterative curvature correction and learn linear models on deep data representations to solve non-linear regression tasks. Finally, we discuss intriguing parallels to a mechanism identified to be crucial for in-context learning termed induction-head (Olsson et al., 2022) and show how it could be understood as a specific case of in-context learning by gradient descent learning within Transformers.
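The core equivalence can be checked numerically in a few lines: with in-context tokens (x_i, y_i), one gradient-descent step on a linear regression loss starting from W = 0 produces exactly the same query prediction as a single softmax-free linear self-attention readout with values y_i, keys x_i, and query x_q. This is a simplified numerical illustration of the paper's weight construction, not its general form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 32
W_true = rng.normal(size=(1, d))
X = rng.normal(size=(n, d))                  # in-context example inputs
y = X @ W_true.T                             # (n, 1) in-context targets
x_q = rng.normal(size=(d,))                  # query input

# One GD step on L(W) = 1/(2n) * sum ||W x_i - y_i||^2, starting from W = 0
eta = 1.0
W_gd = (eta / n) * (y.T @ X)                 # updated weights, shape (1, d)
pred_gd = W_gd @ x_q

# Linear (softmax-free) attention: values y_i, keys x_i, query x_q
pred_attn = (eta / n) * np.sum(y[:, 0] * (X @ x_q))

print(np.allclose(pred_gd, pred_attn))       # True: identical predictions
```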
Although many methods exist to reduce the overfitting of convolutional neural networks (CNNs), it is still unclear how to confidently measure the degree of overfitting. A metric reflecting the level of overfitting, however, could be very useful for comparing different architectures and for evaluating various techniques to counteract overfitting. Since overfitted neural networks tend to memorize noise in the training data rather than generalize to unseen data, we study how training accuracy changes in the presence of increasing data perturbations and examine the connection to overfitting. While previous work targeted only label noise, we study a range of techniques for injecting noise into the training data, including adversarial perturbations and input corruptions. Based on this, we define two new metrics that can confidently distinguish between correct and overfitted models. For evaluation, we derive a pool of models whose overfitting behavior is known in advance. To test the effect of various factors, we introduce several anti-overfitting measures into architectures based on VGG and ResNet and study their influence, including regularization techniques, training-set size, and the number of parameters. Finally, we assess the applicability of the proposed metrics by measuring the degree of overfitting of several CNN architectures outside the model pool.
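A minimal sketch of the underlying measurement, assuming Gaussian input noise and an area-under-curve summary; both are illustrative choices, and the paper's two metrics and noise types (e.g., adversarial perturbations) are more elaborate.

```python
import numpy as np

def perturbation_accuracy_curve(model, X, y, noise_levels, accuracy_fn, rng=None):
    """Hedged sketch: track training accuracy while injecting increasing
    input noise. An overfitted model, having memorised the clean training
    points, typically degrades faster than a well-generalising one."""
    rng = rng or np.random.default_rng()
    accs = []
    for sigma in noise_levels:
        X_noisy = X + rng.normal(scale=sigma, size=X.shape)
        accs.append(accuracy_fn(model, X_noisy, y))
    # one possible scalar summary: area under the accuracy-vs-noise curve
    return np.trapz(accs, noise_levels)
```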
Neural networks trained with stochastic gradient descent (SGD) from different random initializations usually end up functionally very similar, raising the question of whether meaningful differences exist between different SGD solutions. Entezari et al. recently conjectured that, despite the different initializations, the solutions found by SGD lie in the same loss valley once the permutation invariance of neural networks is taken into account. Concretely, they hypothesize that any two solutions found by SGD can be permuted such that the linear interpolation between their parameters forms a path along which the loss does not increase significantly. Here, we use a simple yet powerful algorithm to find such permutations, which allows us to obtain direct empirical evidence that the hypothesis is true for fully connected networks. Strikingly, we find that this already holds for two networks at initialization, and that averaging their random, but suitably permuted, initializations performs significantly above chance. In contrast, for convolutional architectures, our evidence suggests that the hypothesis does not hold. Especially in the large learning-rate regime, SGD appears to discover diverse modes.
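A minimal sketch of the idea for a one-hidden-layer MLP: match the hidden units of two networks with the Hungarian algorithm, permute one network accordingly, then linearly interpolate. The weight-correlation cost is an assumed matching criterion, not necessarily the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(W1_a, W1_b):
    """Find a permutation of network B's hidden units that best aligns its
    first-layer weights (h, d) with network A's."""
    cost = -(W1_a @ W1_b.T)          # maximise weight correlation
    _, perm = linear_sum_assignment(cost)
    return perm

def interpolate(params_a, params_b, perm, lam=0.5):
    """Linearly interpolate two one-hidden-layer MLPs after permuting B.
    params = (W1, b1, W2) with W1: (h, d), b1: (h,), W2: (c, h)."""
    W1a, b1a, W2a = params_a
    W1b, b1b, W2b = params_b
    W1b, b1b, W2b = W1b[perm], b1b[perm], W2b[:, perm]
    return (lam * W1a + (1 - lam) * W1b,
            lam * b1a + (1 - lam) * b1b,
            lam * W2a + (1 - lam) * W2b)
```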
We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage. While generative neural approaches have shown spectacular results on 2D images, they have not yet achieved similarly photorealistic results in combination with scene completion, where a spatial 3D understanding of the scene is essential. To this end, we propose a generative pipeline that operates on a grid-based neural scene representation and completes unobserved scene parts by hallucinating the distribution of the scene in a 2.5D-3D-2.5D manner. We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing regions. Photorealistic image sequences can finally be obtained via consistency-aware differentiable rendering. Comprehensive experiments show that the graphical outputs of our method outperform the state of the art, especially within unobserved scene parts.
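A hypothetical skeleton of the 2.5D-3D-2.5D flow, with all modules as illustrative placeholders; the unprojection and renderer are passed in as callables rather than implemented, and none of this reflects the paper's actual layer choices.

```python
import torch.nn as nn

class SceneCompletionPipeline(nn.Module):
    """Hypothetical skeleton: lift encoded RGB-D features into a 3D
    feature grid, complete geometry and texture with 3D networks, and
    render back to 2D novel views."""

    def __init__(self, feat_dim=32):
        super().__init__()
        self.encoder = nn.Conv2d(4, feat_dim, 3, padding=1)              # RGB-D -> 2.5D features
        self.geometry_net = nn.Conv3d(feat_dim, feat_dim, 3, padding=1)  # complete unseen geometry
        self.texture_net = nn.Conv3d(feat_dim, feat_dim, 3, padding=1)   # inpaint appearance

    def forward(self, rgbd, unproject, render):
        feats2d = self.encoder(rgbd)          # (B, C, H, W)
        grid3d = unproject(feats2d)           # (B, C, D, H, W), via depth and camera pose
        grid3d = self.texture_net(self.geometry_net(grid3d))
        return render(grid3d)                 # differentiable rendering to novel views
```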